JAMA Network Open
● American Medical Association (AMA)
Preprints posted in the last 7 days, ranked by how well they match JAMA Network Open's content profile, based on 127 papers previously published here. The average preprint has a 0.15% match score for this journal, so anything above that is already an above-average fit.
Laskaris, Z.; Baron, S.; Markowitz, S. B.
Objectives: Rising temperatures are a major climate-related hazard for U.S. workers, increasing heat-related illness and a broad range of occupational injuries through indirect pathways often overlooked in economic evaluations. We examined the association between temperature and occupational injury and illness and quantified heat-attributable injuries (including illnesses) and costs in New York State. Methods: We conducted a time-stratified case-crossover study of 591,257 workers' compensation (WC) claims during the warm season (2016-2024). Daily maximum temperature was linked to injury date and county and modeled using natural cubic splines, with effect modification by industry and worker characteristics. Results: Injury risk increased with temperature, becoming statistically significant at approximately 78°F. Relative to 65°F, injury odds increased to 1.06 (95% CI: 1.01-1.10) at 80°F, 1.12 (1.07-1.18) at 90°F, and 1.17 (1.11-1.23) at 95°F. Overall, 5.0% of claims (2,322 annually) were attributable to heat. At temperatures ≥80°F, an estimated 1,729 excess injuries occurred annually, generating approximately $46 million in WC costs. An estimated $3.2 million to $36.1 million in medical expenditures were associated with incomplete claims, likely borne outside the WC system. Conclusions: These findings demonstrate substantial economic costs not fully captured within WC and support workplace heat protections as a cost-containment strategy that can reduce health care spending and strengthen workforce resilience.
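The arithmetic behind heat-attributable counts like those above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the odds ratios are taken from the abstract, but the per-band claim counts are hypothetical placeholders, and the attributable fraction among the exposed is approximated as (OR - 1)/OR.

```python
# Hedged sketch: deriving heat-attributable claim counts from odds ratios.
# ORs are from the abstract; the annual claim counts per temperature band
# are ILLUSTRATIVE ONLY, not the study's data.

def attributable_fraction(odds_ratio):
    """Attributable fraction among the exposed: (OR - 1) / OR."""
    return (odds_ratio - 1.0) / odds_ratio

# (temperature in F, OR vs 65F from the abstract, hypothetical annual claims)
bands = [
    (80, 1.06, 20000),
    (90, 1.12, 12000),
    (95, 1.17, 5000),
]

# Excess (heat-attributable) claims = sum of AF * exposed claims per band.
excess = sum(n * attributable_fraction(orr) for _, orr, n in bands)
print(f"Illustrative heat-attributable claims/year: {excess:.0f}")
```

With real claim counts per temperature band, the same sum yields the study-style annual excess-injury estimate.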
Maldonado, A.; Heberer, K.; Lynch, J.; Cogill, S. B.; Nallamshetty, S.; Chen, Y.; Shih, M.-C.; Bress, A. P.; Lee, J.
Importance: Semaglutide, a glucagon-like peptide-1 receptor agonist (GLP-1RA), is a highly effective medication to treat type 2 diabetes and obesity. However, concerns about potential suicidality persist, creating clinical uncertainty about its neuropsychiatric safety. Objective: To assess risks of suicidality after initiating semaglutide compared to initiating SGLT2i, and by duration of continuous semaglutide treatment. Design: Active-comparator, new-user target trial emulation to estimate inverse probability-weighted marginal cause-specific hazard ratios (HRs). For duration-of-treatment analyses, we used clone-censor-weight methods to estimate exposure-adjusted effects. Setting: Veterans Health Administration. Participants: U.S. Veterans with type 2 diabetes receiving care from March 1, 2018 to September 1, 2025. Exposure: Initiation of semaglutide vs SGLT2i; duration of semaglutide use (≤6, 7-12, >12 months). Outcomes: Incident suicidal ideation; suicide attempt or death; and a composite outcome. Results: A total of 102,361 Veterans met inclusion criteria, including 11,478 new initiators of semaglutide and 90,883 new initiators of an SGLT2i. After overlap weighting, baseline characteristics were well balanced between treatment groups (mean [SD] age, 60.1 [11.7] years; BMI, 37.8 [6.8] kg/m2; hemoglobin A1c, 7.0% [1.4]; 85.5% male; 61.9% non-Hispanic White). During a median follow-up of 2.2 years, 9077 incident suicidal ideation events and 696 suicide attempts or deaths occurred. The incidence rate of suicidal ideation was 56.3 and 37.7 per 1000 person-years among semaglutide initiators and SGLT2i initiators, respectively (hazard ratio [HR], 0.99; 95% CI, 0.93-1.06; P = .86). For suicide attempts or deaths, the incidence rates were 4.30 and 2.64 per 1000 person-years, respectively (HR, 1.05; 95% CI, 0.84-1.31; P = .86).
In adherence-adjusted analyses, sustained semaglutide treatment for more than 12 months, compared with 6 or fewer months, was associated with a 74% lower risk of suicide attempts or deaths (HR, 0.27; 95% CI, 0.14-0.54; P < .001). Conclusion: Among U.S. Veterans with type 2 diabetes, initiators of semaglutide were not observed to have an increased risk of suicidality compared with initiators of SGLT2i. Those with longer semaglutide treatment (beyond 12 months) had a decreased risk of suicide attempt or death, suggesting that longer-term treatment is safe and may protect against those outcomes.
Honermann, B.; Grimsrud, A.; Lankiewicz, E.; Sherwood, J.; Millett, G.
Introduction: On January 20, 2025, the U.S. government froze foreign assistance, including for PEPFAR, though a limited waiver for "life-saving" interventions was subsequently granted. PEPFAR's 2025 monitoring results, released April 17, 2026, covered only quarter 4, while an earlier inadvertent release included all four quarters. Combining both data sets, we systematically assess facility-level programmatic performance and reporting trends to quantify service disruptions, accounting for reporting discrepancies. Methods: We categorized facilities by reporting continuity across Q1 2024 and Q4 2025 (e.g. continuous, intermittent, dropped, or new) and assessed changes in service delivery by category of health facility for key HIV treatment, testing, PMTCT, and prevention programming. We additionally analyzed changes in employed human resources for health (HRH) reported by PEPFAR. Results: PEPFAR data included 31,746 facilities and community service sites. 71.3% were classified as continuous reporters, 16.9% intermittent reporters, 2.5% community services, 3.9% dropped in 2025, and 3.1% new in 2025. The total number of people accessing HIV treatment declined modestly, by 0.3%, but changes differed by facility category. Continuous facilities saw a 0.5% increase in people on treatment, while intermittent facilities saw a 1.7% decrease. HIV testing declined 17%. HIV diagnoses declined 13% in continuous facilities, 35% in community services, and 29% in intermittent facilities. PMTCT infant testing and diagnoses declined by 6% and 12% in continuous facilities, respectively, and 60% and 31% in intermittent facilities, respectively. PrEP initiations declined 33%. Total direct service delivery health care workers declined by 62,541 (24%). Conclusion: These findings reveal substantial disruptions across PEPFAR service areas, with the steepest declines among intermittent and community-based delivery sites, alongside a 24% reduction in direct service delivery healthcare workers.
As potentially the final data set PEPFAR will ever release, these findings represent a troubling inflection point. The dismantling of public data systems and accountability structures undermines progress and enables programmatic gaps to develop and go unnoticed, risking an HIV resurgence over the coming years.
Cody, M. E.; Chang, H.-C.; Foldi, J.; Jankowitz, R. C.; Balic, M.; Cushing, T.; Donnelly, C.; Freeney, S.; Levine, J.; Petitti, L.; Ryan, N.; Spencer, K.; Turner, C.; Tseng, G. C.; Desmedt, C.; Oesterreich, S.; Lee, A. V.
Background: Invasive lobular breast cancer (ILC) is the most commonly diagnosed special histological subtype of breast cancer (BC). Metastatic ILC (mILC) is less sensitive to FDG-PET imaging and often metastasizes to unusual sites, including the peritoneum, gastrointestinal (GI) tract, ovaries, urinary tract, and orbit, which may go unrecognized after a long disease-free interval. Some metastatic sites cause nonspecific symptoms, like abdominal/epigastric pain, with numerous published case reports of mILC misdiagnosed as gastric cancer. These atypical BC metastatic sites may lead to late diagnosis and/or misdiagnosis, thereby delaying effective treatments. Objective: We developed a patient survey to investigate the patient-reported prevalence of delayed diagnosis or misdiagnosis of mILC and their potential impact on treatment outcomes. Methods: A 45-question survey was developed and piloted with breast cancer researchers, clinical oncologists, and patient advocates. This IRB-approved survey was then distributed to patients with ILC. Analyses, including data QC and visualization, were conducted in R using descriptive statistics. Incomplete or inconsistent responses were excluded, and summary statistics were stratified by four common mILC sites to highlight subgroup differences. Results: 525 patient surveys were completed, with 450 patients diagnosed with ILC, of whom 321 were diagnosed with mILC. For those with mILC, 33.3% (n=107) were diagnosed with de novo mILC at initial presentation. Of the patients diagnosed with mILC, 32.1% (n=103) presented with other medical conditions at diagnosis. Misdiagnosis was reported by 26.2% (n=84) of patients with mILC, and of these cases, 31% (n=26) had ≥2 misdiagnoses. The top 5 misdiagnoses were bone-related condition (24.7%), benign breast condition (23.4%), another type of BC (7.8%), diagnostic delay (7.8%), and menopause-related (5.2%). 44.5% of patients waited ≥1 year for an accurate diagnosis.
49 patients were treated for their misdiagnosis, and 6 received incorrect cancer treatments. The most frequently reported contributors to delayed diagnosis or misdiagnosis were inconclusive imaging, providers' lack of ILC knowledge, and initial misdiagnosis. Of the 321 patients with mILC, 138 (42.9%) reported symptoms before diagnosis; the most common were back pain (16.5%), fatigue/malaise (14.9%), GI symptoms (11.8%), bloating (8.4%), and weight loss (8.1%). Although 40% of patients reported having a mammogram at the time of their initial misdiagnosis, ILC was detected in only 20.5% (24/116) of these cases, and mammography detected only 5 (25%) of the 20 de novo mILC cases. Patients reported additional diagnostic testing within 1-3 months of their initial mammogram, including biopsy, ultrasound (US), and MRI. 47.9% of patients were in active BC surveillance after curative-intent therapy at the time of their mILC diagnosis; however, no statistical difference in time to diagnosis was seen versus patients not under surveillance. Conclusion: Our survey results underscore the urgent need to improve diagnostic strategies for mILC. Addressing delays and diagnostic errors in mILC is critical to optimizing treatment strategies and improving patient outcomes.
Yoshimoto, H.; Hadano, T.; Shimada, K.; Gosho, M.; Fukuda, T.; Komano, Y.; Umeda, K.; Iwase, M.; Kusano, Y.; Kawabata, T.
Background: Practical alcohol risk-reduction strategies are widely recommended in public-facing alcohol guidance, but randomized evidence from socially interactive drinking episodes remains limited. We conducted a pilot cluster randomized trial to evaluate the feasibility and preliminary effects of a package intervention comprising practical drinking-strategy information, participant self-selection of same-day strategies, and a brief commitment declaration in a social drinking laboratory. Methods: This single-center, parallel-group pilot trial was conducted in Japan. Pre-existing social groups participated. One or two groups scheduled in the same session slot were combined into a time-slot allocation unit, which was randomized 1:1 either to the package intervention or to alcohol-related knowledge only. The primary outcome was total pure alcohol intake during the first 120 min. Session satisfaction on a Visual Analog Scale (VAS) was a prespecified secondary participant-experience outcome. Results: Of 83 interested individuals, 63 were randomized, and 59 participants in 17 social groups and 12 allocation units were included in the modified intention-to-treat analysis. The mean paired intervention-control difference for 120-min alcohol intake was -8.84 g (95% confidence interval [CI] -27.92 to 10.23; exact sign-flip p = 0.281). The corresponding exploratory 0-30 min difference was -4.90 g (95% CI -10.48 to 0.68; exact sign-flip p = 0.094). In a genotype-adjusted participant-level sensitivity analysis, the intervention coefficient for 120-min intake was -16.0 g (95% CI -30.9 to -1.1; p = 0.036). Session satisfaction was high in both arms with no clear between-arm difference. Next-day follow-up was 100%, and no adverse-event-related discontinuations occurred. Conclusions: The intervention was feasible to deliver in a socially interactive drinking setting, and session satisfaction was high in both arms. Primary allocation-unit estimates favored lower alcohol intake but were imprecise.
Larger trials are needed to estimate effects more precisely, while considering the potential influence of genotype imbalance on effect estimation in East Asian samples. Trial registration: University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR) UMIN000060685. Registered 17 February 2026.
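The "exact sign-flip" p-values reported in this trial come from a standard permutation argument: under the null, each paired intervention-control difference is equally likely to be positive or negative, so all 2^n sign assignments are enumerated. A minimal sketch, using hypothetical paired differences rather than the trial's data:

```python
# Hedged sketch of an exact sign-flip (permutation) test on paired
# differences; the differences below are ILLUSTRATIVE, not trial data.
from itertools import product

def sign_flip_p(diffs):
    """Two-sided exact sign-flip p-value for paired differences."""
    n = len(diffs)
    observed = abs(sum(diffs))
    # Enumerate all 2^n sign assignments; count those at least as extreme.
    count = sum(
        1
        for signs in product((1, -1), repeat=n)
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed
    )
    return count / 2 ** n

diffs = [-12.0, -3.5, 4.1, -8.8, -1.2, 6.0]  # hypothetical paired g-of-alcohol differences
print(f"two-sided exact sign-flip p = {sign_flip_p(diffs):.3f}")
```

With the trial's 12 allocation-unit pairs this enumerates 4,096 assignments, which is why an exact (rather than asymptotic) test is feasible at this sample size.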
Li, J.; Steimle, L. N.; Carrel, M.; Byrd, R. A.; Radke, S. M.
Purpose: To characterize maternal transport patterns in Iowa, a state with designated levels of maternal care but without formal perinatal regions, and to assess whether transport decisions reflect efficient, risk-appropriate coordination. Methods: We analyzed 2010-2023 Iowa birth records, which included 2,251 maternal transports between obstetric facilities across 106 unique routes. We characterized transport patterns and applied a community detection algorithm to identify "communities" of obstetric facilities that disproportionately transport among themselves. Findings: Suburban and rural counties had elevated transport rates compared to urban counties. 2,189 transports (97%) were from lower- to higher-level facilities. Among these, 2,037 (93%) were to Level III tertiary care centers. 567 transports (25.2%) bypassed a closer facility offering an equivalent or higher level of care than the destination facility. Health system affiliation was associated with bypassing transport, indicating potentially organizational rather than purely geographic drivers of transport decisions. Three "communities" of obstetric facilities, largely shaped by geographic proximity, were identified. Conclusions: Although Iowa does not have formal perinatal regions, patterns of maternal transport are mostly in line with three de facto regions. Some potential inefficiencies were identified, such as obstetric facilities transporting to a farther facility when a closer facility offered the same level of care or higher. These findings may help identify opportunities to enhance care coordination among obstetric facilities, optimize maternal transport networks, and improve regionalization of maternal care.
Khattab, A.; Wang, Z.; Srinivasasainagendra, V.; Tiwari, H. K.; Loos, R.; Limdi, N.; Irvin, M. R.
Background: Diabetic kidney disease (DKD) is a leading cause of kidney failure in individuals with type 2 diabetes (T2D), yet risk identification in routine clinical practice remains incomplete. A critical and often overlooked barrier is risk observability: how much of a patient's underlying risk is actually captured in their clinical record at the time of screening. Existing prediction models evaluate performance using model-specific thresholds, making it difficult to understand how additional data sources alter real-world screening behavior or which individuals benefit when models are expanded. Methods: We developed a series of five nested machine learning models evaluated at a one-year landmark following T2D diagnosis using data from the All of Us Research Program (N = 39,431; cases = 16,193). Each successive model added a distinct information layer (intrinsic risk, laboratory snapshots, medication exposure, longitudinal care trajectories, and social determinants of health [SDOH]) while retaining all prior features. All models were evaluated under a fixed screening policy targeting 90% specificity, so that the false positive rate remained constant as the information available to the model grew. External validation was conducted in the BioMe Biobank (N = 9,818) without retraining. Results: Discrimination improved consistently across layers, from AUROC 0.673 (M1) to 0.797 (M5). Under the fixed screening policy, sensitivity nearly doubled from 0.27 to 0.49, with a cumulative recovery of 30.4% of cases missed by the base model.
Gains were driven by distinct subgroups at each transition: laboratory features identified biologically high-risk individuals; medication features captured those with high treatment intensity reflecting advanced cardiometabolic burden; longitudinal care trajectory features rescued cases with biological instability observable only through repeated measurements; and SDOH features recovered individuals with limited clinical observability, with rescue probability highest among those with the fewest recorded monitoring domains. Sparse data in the clinical record indicated low observability, not low risk. Social and genetic features each contributed most when downstream physiologic signal was limited, supporting a contextual rather than universal role for each. In BioMe, discrimination was attenuated (M4 AUROC 0.659), but the relative ordering of information layers was fully preserved, and a systematic upward shift in predicted probability distributions underscored the need for recalibration before deployment in a new setting. Conclusions: DKD risk detection in T2D is substantially improved by integrating complementary information layers under a fixed clinical screening policy, with gains arising from distinct domains that identify at-risk individuals in different clinical contexts. The layered landmark framework introduced here reveals how risk observability, shaped by monitoring intensity, healthcare engagement, and access, determines what a screening model can detect, and provides a foundation for context-aware EHR-based screening that accounts for data availability at the time of risk assessment.
[Graphical abstract] Study design and layered DKD screening framework. The top row defines the cohort timeline, in which predictors are derived from clinical data collected between T2D diagnosis and the 1-year landmark, and incident DKD is ascertained after the landmark. The second row depicts the nested model architecture, in which five successive models sequentially incorporate intrinsic risk, laboratory snapshot features, medication exposure, longitudinal care trajectories, and social determinants of health, while retaining all features from prior layers. The third row summarizes model development in the All of Us Research Program (N = 39,431) and external validation in the BioMe Biobank (N = 9,818), where the same trained models and risk thresholds were applied without retraining. The bottom row highlights the three evaluation domains: predictive performance, fixed-policy screening, and missed-case recovery context. DKD, diabetic kidney disease; T2D, type 2 diabetes; PRS, polygenic risk scores; AUROC, area under the receiver operating characteristic curve; AUPRC, area under the precision-recall curve; PPV, positive predictive value; SHAP, SHapley Additive exPlanations.
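The "fixed screening policy targeting 90% specificity" described in this abstract amounts to picking a score threshold on the control (non-case) distribution and holding it constant while the model's feature set grows. A hedged sketch on synthetic scores (not the authors' pipeline or data):

```python
# Hedged sketch: fix specificity at 90% by thresholding control scores,
# then measure sensitivity on cases at that same threshold.
# All scores are SYNTHETIC; this is not the study's model output.
import random

random.seed(0)
controls = [random.gauss(0.3, 0.15) for _ in range(1000)]  # non-DKD scores
cases = [random.gauss(0.5, 0.15) for _ in range(400)]      # incident-DKD scores

# Threshold = 90th percentile of control scores -> ~10% false-positive rate.
thr = sorted(controls)[int(0.9 * len(controls))]

sensitivity = sum(s >= thr for s in cases) / len(cases)
specificity = sum(s < thr for s in controls) / len(controls)
print(f"threshold={thr:.3f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")
```

Because the threshold is set on controls alone, specificity (and hence the false-positive burden on the clinic) stays fixed as richer models are swapped in; only sensitivity moves, which is what makes the M1-to-M5 sensitivity comparison interpretable.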
DeCuir, J.; Reeves, E. L.; Weber, Z. A.; Yang, D.-H.; Irving, S. A.; Tartof, S. Y.; Klein, N. P.; Grannis, S. J.; Ong, T. C.; Ball, S. W.; DeSilva, M. B.; Dascomb, K.; Naleway, A. L.; Koppolu, P.; Salas, S. B.; Sy, L. S.; Lewin, B.; Contreras, R.; Zerbo, O.; Hansen, J. R.; Block, L.; Jacobson, K. B.; Dixon, B. E.; Rogerson, C.; Duszynski, T.; Fadel, W. F.; Barron, M. A.; Mayer, D.; Chavez, C.; Yates, A.; Kirshner, L.; McEvoy, C. E.; Akinsete, O. O.; Essien, I. J.; Sheffield, T.; Bride, D.; Arndorfer, J.; Van Otterloo, J.; Natarajan, K.; Ray, C. S.; Payne, A. B.; Adams, K.; Flannery, B.; Garg,
Background: The 2024-25 influenza season was the most severe in the United States (US) since 2017-18, with co-circulation of both influenza A virus subtypes (H1N1 and H3N2). Influenza vaccine effectiveness (VE) has varied by season, setting, and patient characteristics. Methods: Using electronic healthcare encounter data from eight US states, we evaluated VE against influenza-associated hospitalizations and emergency department or urgent care (ED/UC) encounters from October 2024 to April 2025 among children aged 6 months-17 years and adults aged 18+ years. Using a test-negative, case-control design, we compared the odds of influenza vaccination between acute respiratory illness (ARI) encounters with a positive (cases) versus negative (controls) test for influenza by molecular assay, adjusting for confounders. Results: Analyses included 108,618 encounters (5,764 hospitalizations and 102,854 ED/UC encounters) among children and 309,483 encounters (76,072 hospitalizations and 233,411 ED/UC encounters) among adults. Among children across care settings, 17.0% (6,097/35,765) of cases versus 29.4% (21,449/72,853) of controls were vaccinated. Among adults, 28.2% (21,832/77,477) of cases versus 44.2% (102,560/232,006) of controls were vaccinated. VE was 51% (95% confidence interval [95% CI]: 41-60%) against influenza-associated hospitalizations and 54% (95% CI: 52-55%) against influenza-associated ED/UC encounters among children. VE was 43% (95% CI: 41-46%) against influenza-associated hospitalizations and 49% (95% CI: 47-50%) against influenza-associated ED/UC encounters among adults. Conclusions: Influenza vaccination provided protection against influenza-associated hospitalizations and ED/UC encounters among children and adults in the US during the severe 2024-25 influenza season. These findings support influenza vaccination as an important tool to reduce influenza-associated disease.
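In a test-negative design, VE is estimated as (1 - OR) x 100, where the odds ratio compares vaccination odds in test-positive cases versus test-negative controls. Using the children's counts reported above, the crude (unadjusted) calculation can be reproduced directly; note the published 51-54% estimates are adjusted for confounders, so this crude value differs slightly:

```python
# Crude (unadjusted) test-negative VE from the 2x2 counts in the abstract
# for children across care settings. The published VE is adjusted, so this
# back-of-envelope figure is only an approximation of it.

vax_cases, total_cases = 6097, 35765          # influenza-positive encounters
vax_controls, total_controls = 21449, 72853   # influenza-negative encounters

odds_cases = vax_cases / (total_cases - vax_cases)
odds_controls = vax_controls / (total_controls - vax_controls)
odds_ratio = odds_cases / odds_controls

ve = (1.0 - odds_ratio) * 100.0
print(f"crude OR = {odds_ratio:.3f}, crude VE = {ve:.1f}%")
```

The crude VE of roughly 51% lands close to the adjusted hospitalizations estimate for children, illustrating why the test-negative OR is the workhorse of seasonal VE surveillance.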
Purnell, J. Q.; Getahun, D.; Vesco, K. K.; Qiu, S.; Shi, J. M.; Wong, C. P.; Koppolu, P.; Im, T. M.; Oshiro, C. E.; Boone-Heinonen, J.
Preconception weight loss by metabolic-bariatric surgery (MBS) improves maternal-fetal outcomes, but little is known about its impact on offspring growth and health. The Preconception Bariatric Surgery and Child Health Outcomes (POSIT) study aims to estimate the effects of maternal MBS-induced preconception weight loss on infant and childhood body size, growth, and related outcomes. This report presents the methods used to construct the POSIT cohort and its baseline characteristics. This retrospective cohort study sampled members of a United States healthcare system aged 18 and older with a singleton, live birth to create three study groups: 1) a treatment group including women who underwent preconception MBS and subsequently became pregnant (n=1,374); 2) a control group matched to the MBS pre-surgery body mass index (BMI) (pre-surgery controls, n=13,740); and 3) a second control group matched to the MBS post-surgical, pre-pregnancy BMI (pre-pregnancy controls, n=13,740). The MBS and pre-surgery BMI control groups showed slight imbalances: pre-surgery BMI controls were on average ~6 months younger, had a 0.6 kg/m2 lower BMI (44.5 kg/m2) at the time of their pregnancy, and were more likely to have become pregnant in earlier years than the MBS group prior to surgery. The MBS and pre-pregnancy control groups had comparable age (mean ± SD, 33 ± 5 years), pre-pregnancy BMI (33 ± 6 kg/m2), and year of delivery. Following matching, the MBS group had socioeconomic and health disparities similar to the pre-surgery control group, and both were worse off than the pre-pregnancy control group. The pregestational maternal comorbidity index improved after MBS and matched the pre-pregnancy controls. Through extraction of offspring growth patterns and mediation analyses of maternal weight loss and metabolic responses to MBS, the study will investigate effects of preconception weight loss by MBS on short- and long-term child health outcomes.
Results will guide future studies focusing on improving maternal preconception weight and maternal-fetal outcomes.
Armstrong, M.; Williams, H.; Fernandez Faith, E.; Ni, A.; Xiang, H.
Background: Lasers have wide applications in medicine and dermatology but are associated with pain and anxiety, particularly in younger patients. Pain mitigation is often limited to topical anesthetics in the outpatient setting. Distraction techniques are limited by the need for ocular protection, which can include adhesive eye patches that completely occlude vision. Virtual reality is effective at managing procedural pain and anxiety during other short medical procedures and is a promising tool for this population. Objective: This trial aims to assess the safety, feasibility, and efficacy of the Virtual Reality Pain Alleviation Therapeutic (VR-PAT) for pain management during outpatient laser procedures. Methods: 40 patients requiring outpatient laser therapy for at least two sessions will be recruited from a pediatric hospital in the midwestern United States for this randomized, two-arm crossover clinical trial with a 1:1 allocation ratio. During the first laser visit, each participant will be randomly assigned to either play the VR-PAT game during the procedure or wear the headset with a dark screen. Participants will answer questions about their pain (Numeric Rating Scale (NRS) 0-10), anxiety (State-Trait Anxiety Inventory for Children, NRS 0-10, Modified Yale Preoperative Anxiety Scale (mYPAS)), and pain medication usage. Those playing the VR-PAT will additionally report simulator sickness symptoms and their experience playing the game. At their second laser visit, participants will cross over to the opposite intervention from their first visit. The primary outcomes are the differences in self-reported pain and anxiety between the two interventions. Feasibility outcomes include the proportion of screened patients who are eligible, consent, and complete both visits, and adverse events reported. To evaluate the efficacy of pain reduction, composite scores of pain and pain medication use will be calculated for each laser visit.
To evaluate the efficacy of anxiety reduction, the change in mYPAS scores will be compared between control and VR groups at each visit using Wilcoxon rank sum tests. All statistical analyses will follow the intention-to-treat principle with regard to intervention assignment at each visit. Results: The study was funded in January 2023 and began enrollment at that time. A total of n=44 participants were recruited, and data collection was completed in November 2025, with n=40 subjects completing both visits. The sample was balanced, with all n=40 subjects experiencing both the intervention and the control condition. The age range of the complete sample was 6 to 21 years at recruitment, and 55% were female. Data analysis is in progress, with final results planned for June 2026. Conclusions: Findings from this innovative randomized clinical trial will provide early evidence on the efficacy of the VR-PAT for reducing self-reported pain and anxiety during outpatient laser procedures. The results from this trial will inform a large-scale, multisite study. Trial Registration: ClinicalTrials.gov: NCT05645224 [https://clinicaltrials.gov/study/NCT05645224]
Glick, C. C.; Pirzada, S. T.; Quah, S. K.; Feldman, S.; Enabulele, I.; Madsen, S.; Billimoria, N.; Feldman, S.; Bhatia, R.; Spiegel, D.; Saggar, M.
Background: Scalable, low-burden behavioral interventions are needed to address rising subclinical mental health symptoms. However, few randomized controlled trials have evaluated ultra-brief, remotely delivered meditation using multimodal outcome assessment under real-world conditions. Methods: We conducted a fully remote randomized controlled trial (ClinicalTrials.gov: NCT06014281) evaluating a focused-attention meditation intervention delivered via brief instructor training and independent daily practice. A total of 299 meditation-naive adults were randomized to immediate intervention or waitlist control in a delayed-intervention design. Participants practiced ≥10 minutes daily for 8 weeks within a 16-week study. Outcomes included validated self-report measures, web-based cognitive tasks, and wearable-derived physiological metrics. Results: Across randomized and within-participant replication phases, the intervention was associated with significant reductions in anxiety and mind wandering, with effects remaining stable during 8-week follow-up. Improvements were greatest among participants with higher baseline symptom burden. Sleep disturbance improved selectively among individuals with poorer baseline sleep. Secondary outcomes, including rumination, perceived stress, social connectedness, and quality of life, also improved. Cognitive performance showed modest improvements primarily among lower-performing participants. Resting heart rate exhibited nominal reductions. Conclusions: An ultra-brief, fully remote meditation intervention requiring 10 minutes per day was associated with sustained improvements in psychological functioning and smaller, baseline-dependent effects on cognition in a non-clinical population. These findings support digital delivery of low-dose meditation as a scalable preventive mental health strategy.
Thaqi, F.; Bieber, K.; Kerniss, H.; Kridin, K.; Curman, P.; Ludwig, R.
Background: Clinical and genetic evidence on the association between atopic dermatitis (AD) and subsequent psoriasis remains conflicting, and it is unclear whether this risk is modified by systemic treatments. Recent reports suggest type 2-targeted biologics may unmask psoriasis in AD patients, but data are limited. We thus aimed to assess whether AD is associated with incident psoriasis and whether this risk differs by systemic treatment, particularly biologics versus conventional systemic immunosuppressants (cvIS). Methods: Scoping analyses informed a locked analytic design, preregistration at OSF, and confirmatory execution. Propensity score-matched analyses compared AD with non-AD controls and biologics with cvIS. Sensitivity analyses, Cox model triangulation, and control outcomes assessed robustness. Findings: Among ~300,000 matched pairs, AD was associated with increased psoriasis risk (primary HR 3.81, 95% CI 3.35-4.34), consistent across all 8 sensitivity analyses and model triangulation. Biologic treatment was associated with reduced psoriasis risk versus cvIS (primary HR 0.20, 95% CI 0.11-0.35), consistent across 6 of 7 evaluable sensitivity analyses and Cox triangulation. Positive and negative control outcomes showed expected directional patterns. Interpretation: Acknowledging limitations including residual confounding and coding misclassification, AD was associated with increased psoriasis risk, and biologics with lower psoriasis risk than cvIS. Funding: DFG (EXC2167, SFB1526, LU877/25-1), Schleswig-Holstein Excellence-Chair Program, Swedish Society for Dermatology and Venereology, and the Tore Nilson Foundation. Research in context. Evidence before this study: Atopic dermatitis (eczema) and psoriasis are the two most common chronic inflammatory skin diseases worldwide. For a long time, doctors and researchers assumed these two conditions could not occur in the same person, as they were thought to involve opposing immune responses.
However, this view has been challenged over the past decade. Some large studies, including population-based cohorts from Taiwan and the United Kingdom, have found that people with eczema may be at higher risk of developing psoriasis over time, while other studies, including genetic analyses, have suggested the opposite: that the two diseases may actually protect against each other. This conflicting picture has left clinicians uncertain about the true relationship between the two diseases in everyday clinical practice. A separate but related concern has emerged with the introduction of a new class of highly effective treatments for eczema: biologics, particularly dupilumab. Case reports and observational studies, including a large study published in JAMA Dermatology in 2025, have raised the possibility that these medications might trigger psoriasis in some patients, potentially by shifting the immune system from one inflammatory pattern to another. However, prior studies on this question had important methodological limitations: they were not pre-planned and registered before data collection, they did not always tightly link treatment use to an eczema diagnosis, and, critically, none compared biologic treatment directly against conventional immunosuppressant medications, the most relevant clinical comparator. Added value of this study: This study is a large and methodologically rigorous investigation of both questions: whether eczema itself increases the risk of developing psoriasis, and whether the type of systemic treatment used for eczema influences that risk. Using a database of over 110 million electronic health records from across the United States, we matched approximately 300,000 patients with eczema to 300,000 patients without eczema and followed them for up to seven years. We also compared nearly 5,500 patients treated with biologics to an equal number treated with conventional immunosuppressants.
Crucially, our study was pre-registered before any data were analyzed, meaning the research questions, methods, and analyses were locked in advance and could not be adjusted based on what the data showed. We also used a range of additional analyses to test whether our findings were robust, including checks using outcomes that should not be affected by eczema or its treatment (such as appendectomy and hearing loss), which confirmed that our results were not likely explained by bias alone. We found that eczema was associated with an increased risk of developing psoriasis, but that this risk was substantially influenced by the choice of comparison group, ranging from approximately 1.4-fold to nearly 4-fold depending on the analytical approach. More strikingly, we found that patients treated with biologics had a markedly lower risk of developing psoriasis compared with those treated with conventional immunosuppressants, the opposite of what prior reports had suggested. This finding was consistent across nearly all additional analyses performed. Implications of all the available evidence: Taken together with existing evidence, these findings suggest two important conclusions. First, clinicians should be aware that eczema, particularly moderate-to-severe eczema requiring systemic treatment, may carry an elevated risk of developing psoriasis over time. This does not mean that all patients with eczema need to be screened for psoriasis routinely, but it does support clinical awareness and monitoring in higher-risk patients. Second, and perhaps most importantly for treatment decisions, biologics do not appear to increase the risk of psoriasis compared with conventional immunosuppressants and may in fact be associated with a lower risk. This provides reassurance for patients and clinicians considering biologic therapy and challenges the narrative that these medications trigger psoriasis. 
Future research should aim to confirm these findings in other populations, investigate the biological mechanisms underlying the relationship between eczema and psoriasis, and examine whether specific biologic agents differ from one another in their effects on psoriasis risk.
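The propensity score-matched comparisons described in this abstract can be illustrated with a minimal greedy 1:1 caliper-matching sketch in Python. This is not the authors' implementation: the caliper width, the logit-scale distance, and the extreme-first matching order are illustrative assumptions, and the propensity scores are taken as already estimated.

```python
import math

def greedy_caliper_match(treated, controls, caliper_sd=0.2):
    """Greedy 1:1 nearest-neighbour matching on the logit of the
    propensity score, with a caliper expressed in SDs of the logit.
    `treated` and `controls` map subject id -> propensity score."""
    logit = lambda p: math.log(p / (1 - p))
    t = {i: logit(p) for i, p in treated.items()}
    c = {i: logit(p) for i, p in controls.items()}
    all_logits = list(t.values()) + list(c.values())
    mean = sum(all_logits) / len(all_logits)
    sd = (sum((x - mean) ** 2 for x in all_logits) / (len(all_logits) - 1)) ** 0.5
    caliper = caliper_sd * sd
    pairs, used = [], set()
    # Match the hardest-to-match (most extreme) treated units first.
    for ti, tl in sorted(t.items(), key=lambda kv: -abs(kv[1] - mean)):
        best, best_d = None, caliper
        for ci, cl in c.items():
            if ci in used:
                continue
            d = abs(tl - cl)
            if d <= best_d:
                best, best_d = ci, d
        if best is not None:  # no match within the caliper -> unit is dropped
            used.add(best)
            pairs.append((ti, best))
    return pairs
```

In practice the matched pairs would then feed a stratified or conditional outcome model; treated units with no control inside the caliper are simply excluded, which is one reason matched cohorts shrink relative to the source population.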
Luo, M.; Trindade Pons, V.; Zakharin, M.; Pingault, J.-B.; Gillespie, N. A.; van Loo, H. M.
Show abstract
Substance use disorders run in families, yet the mechanisms underlying intergenerational transmission remain unclear. We investigated indirect genetic effects, pathways through which parental genotypes influence offspring phenotypes via the family environment, for alcohol use disorder (AUD), nicotine dependence (ND), and related quantitative outcomes, and aimed to identify family environmental factors through which such effects may operate. Using transmitted and non-transmitted polygenic scores (PGS) constructed for problematic alcohol use, tobacco use disorder, and general addiction liability, we analyzed 5972 European-ancestry adult offspring with at least one genotyped parent from the population-based Lifelines cohort (Netherlands). Offspring outcomes included lifetime DSM-5 AUD diagnosis, AUD symptom count, maximum drinks in 24 hours, Fagerström Test for Nicotine Dependence score, and cigarettes per day. AUD findings were meta-analyzed with data from the Brisbane Longitudinal Twin Study (N = 1368; Australia). We also examined parent-of-origin effects and mediation by parental substance use and socioeconomic status using structural equation modeling. Transmitted PGS robustly predicted all AUD and ND outcomes (β = 0.07-0.16; OR = 1.20 for AUD diagnosis). Non-transmitted PGS, indexing indirect genetic effects, were negligible for all clinical syndrome outcomes. The only significant indirect genetic effect was on cigarettes per day (β = 0.03, p = 0.01), mediated by parental smoking behavior but not socioeconomic status. These findings indicate that intergenerational transmission of risk for AUD and ND is driven primarily by direct genetic effects, with modest indirect genetic effects on smoking quantity. Larger samples and cross-trait analyses are needed to further elucidate these mechanisms.
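The transmitted versus non-transmitted PGS design used here can be sketched with a toy trio. This is an illustrative decomposition only: the variant weights and allele counts below are hypothetical, and real PGS construction works from phased haplotypes across genome-wide variant panels rather than this simple subtraction.

```python
def transmission_scores(weights, offspring, father, mother):
    """Split parental alleles into a transmitted and a non-transmitted
    polygenic score for one parent-offspring trio.
    `weights`: variant -> per-allele effect weight.
    Genotypes are risk-allele counts (0, 1, 2) per variant, assumed
    consistent with Mendelian transmission."""
    # Transmitted score: what the offspring actually carries.
    transmitted = sum(weights[v] * offspring[v] for v in weights)
    # Non-transmitted score: the alleles the parents carry beyond
    # what the offspring received (2 of the 4 parental alleles).
    non_transmitted = sum(
        weights[v] * (father[v] + mother[v] - offspring[v]) for v in weights
    )
    return transmitted, non_transmitted
```

Under this design, an association between the non-transmitted score and an offspring outcome cannot be a direct genetic effect (the offspring never inherited those alleles), so it indexes genetically influenced features of the rearing environment, the "indirect genetic effects" the abstract tests.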
Qadeer, A.; Gohar, N.; Maniyar, P.; Shafi, N.; Juarez, L. M.; Mortada, I.; Pack, Q. R.; Jneid, H.; Gaalema, D. E.
Show abstract
Introduction: Smoking cessation after acute coronary syndrome (ACS) is a Class I recommendation, yet prescription pharmacotherapy use remains low and its real-world cardiovascular effectiveness when added to nicotine replacement therapy (NRT) is poorly characterized. Methods: We conducted a retrospective cohort study using the TriNetX US Collaborative Network (67 healthcare organizations). Adults hospitalized with ACS who received NRT within one month, serving as a proxy for active smoking status, were identified. Two co-primary propensity-matched (1:1, 50 covariates, caliper 0.10 SD) comparisons evaluated bupropion + NRT and varenicline + NRT individually versus NRT alone; a supportive analysis evaluated combined pharmacotherapy versus NRT alone. All-cause mortality was the primary endpoint. Secondary outcomes included MACE, heart failure exacerbations, major bleeding, TIA/stroke, emergency rehospitalizations, and cardiac rehabilitation utilization, assessed at 6 months and 1 year via Kaplan-Meier analysis. Hazard ratios (HRs) greater than 1.0 indicate higher hazard in the NRT-only group. Results: After matching, the combined analysis comprised 8,574 pairs, the bupropion analysis 4,654 pairs, and the varenicline analysis 2,126 pairs. At 1 year, the combined pharmacotherapy group had significantly lower all-cause mortality (HR 1.26, 95% CI 1.16-1.37), MACE (HR 1.16, 95% CI 1.12-1.21), heart failure exacerbations (HR 1.16, 95% CI 1.08-1.25), major bleeding (HR 1.18, 95% CI 1.08-1.28), and greater cardiac rehabilitation utilization (HR 0.82, 95% CI 0.74-0.92; all p < 0.001). TIA/stroke did not differ significantly. Six-month results were consistent. Both varenicline and bupropion individually showed lower mortality and MACE. A urinary tract infection falsification endpoint showed no between-group differences, supporting matching validity. The pharmacotherapy group had higher rates of new-onset depression, driven predominantly by bupropion recipients. 
Conclusions: In this propensity-matched real-world analysis, adding prescription smoking cessation pharmacotherapy to NRT after ACS was associated with lower mortality and fewer adverse cardiovascular events, supporting broader integration into post-ACS care pathways.
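The outcomes above were assessed via Kaplan-Meier analysis; the product-limit arithmetic behind those curves can be sketched in a few lines of Python. A real analysis would use a survival library and add confidence intervals and hazard-ratio estimation; this minimal estimator only shows how the survival probability steps down at each event time.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate.
    `times`: follow-up times; `events`: 1 = event, 0 = censored.
    Returns a list of (time, survival probability) at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_tied = sum(1 for tt, _ in data if tt == t)
        if deaths > 0:
            # Step the survival curve down by the conditional
            # probability of surviving this event time.
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n_tied  # events and censorings both leave the risk set
        i += n_tied
    return curve
```

Censored observations contribute person-time to the risk set up to their censoring time but never trigger a downward step, which is what lets the method use incomplete follow-up without discarding those patients.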
Rehman, N.; Guyatt, G.; JinJin, M.; Silva, L. K.; Gu, J.; Munir, M.; Sadagari, R.; Li, M.; Xie, D.; Rajkumar, S.; Lijiao, Y.; Najmabadi, E.; Dhanam, V.; Mertz, D.; Jones, A.
Show abstract
Background: Sustained retention in care supports continuous access to antiretroviral therapy, routine clinical monitoring, and long-term viral suppression. Objective: To compare the effectiveness of interventions for improving retention in care among people living with HIV (PLHIV). Design: Systematic review and network meta-analysis. Data sources: PubMed, Embase, CINAHL, PsycINFO, Web of Science, and the Cochrane Library from 1995 to December 2024. Eligibility criteria: Randomised controlled trials (RCTs) evaluating interventions to improve retention in care, viral load suppression, or quality of life (QoL) among PLHIV, compared with standard of care (SoC) or other interventions. Data extraction and synthesis: Pairs of reviewers independently screened studies, extracted data, and assessed risk of bias using ROBUST-RCT. We conducted a fixed-effect frequentist network meta-analysis and rated intervention categories relative to SoC based on effect estimates and the certainty of evidence. Dichotomous outcomes were summarized as odds ratios (ORs) with 95% confidence intervals (CIs), and continuous outcomes as mean differences (MDs) with 95% CIs. Results: Eighty-four trials enrolling 107 137 PLHIV evaluated 13 intervention categories. For retention in care, five interventions supported by moderate or high certainty evidence proved superior to SoC: multi-month dispensing (OR 2.02, 95% CI 1.32 to 3.09), task shifting (OR 1.94, 95% CI 1.42 to 2.66), differentiated service delivery (OR 1.47, 95% CI 1.22 to 1.76), behavioural counselling (OR 1.36, 95% CI 1.21 to 1.54), and supportive interventions (OR 1.31, 95% CI 1.11 to 1.55). For viral load suppression, two interventions supported by moderate or high certainty evidence proved superior to SoC: task shifting (OR 2.07, 95% CI 1.25 to 3.43) and behavioural counselling (OR 1.34, 95% CI 1.11 to 1.67). Across outcomes, no intervention demonstrated convincing superiority over other active interventions. 
Conclusions: Among 13 intervention categories, only a subset provided moderate or high-certainty evidence of superiority to the standard of care, and none showed superiority to other active interventions. Persistent evidence gaps for key populations, diverse settings, and long-term outcomes support the need for context-sensitive and patient-centred interventions. Registration: PROSPERO CRD42024589177. Strengths and limitations of this study:
- This systematic review followed Cochrane methods and was reported in accordance with PRISMA-NMA guidelines.
- The network meta-analysis integrated direct and indirect evidence to compare multiple intervention categories within a single framework.
- Risk of bias and certainty of evidence were assessed using ROBUST-RCT and the GRADE approach for network meta-analysis, respectively.
- Some networks were sparse, and limited representation of key populations and long-term follow-up constrained the strength and generalisability of inferences.
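The "direct and indirect evidence" that a network meta-analysis integrates can be illustrated with its simplest building block: an anchored indirect comparison of two interventions through a common comparator (here, standard of care). The variances in the usage example are hypothetical, and the review's actual fixed-effect model jointly estimates all contrasts rather than combining them pairwise like this.

```python
import math

def indirect_comparison(log_or_a, var_a, log_or_b, var_b):
    """Anchored indirect comparison of interventions A and B through a
    common comparator: the indirect log OR for A vs B is the difference
    of the two direct log ORs (each vs the comparator), and its variance
    is the sum of their variances. Returns (OR, (95% CI low, high))."""
    d = log_or_a - log_or_b
    var = var_a + var_b
    half = 1.96 * math.sqrt(var)
    return math.exp(d), (math.exp(d - half), math.exp(d + half))

# Hypothetical example: A vs SoC OR = 2.0, B vs SoC OR = 1.25,
# with assumed variances on the log scale.
or_ab, ci_ab = indirect_comparison(math.log(2.0), 0.04, math.log(1.25), 0.05)
```

Note how the indirect estimate is always less precise than either direct input (variances add), which is one reason sparse networks, as the limitations bullet above notes, constrain the strength of inference.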
Li, N.
Show abstract
Background: Mindfulness-based interventions (MBIs) have been increasingly adopted in educational settings to support cognitive development in youth. Executive function (EF), encompassing inhibitory control, working memory, and cognitive flexibility, is a plausible target of MBI given its reliance on attention regulation. However, prior reviews have yielded mixed conclusions, partly due to inconsistent construct definitions and the pooling of heterogeneous outcome measures. Objectives: To (1) estimate the pooled effect of MBI on EF in youth aged 3-18 years using only construct-validated, direct EF measures, (2) examine potential moderators including age group, EF domain, and risk of bias, and (3) test dose-response relationships via meta-regression on intervention duration. Methods: We searched PubMed, PsycINFO, CINAHL, Scopus, and Web of Science from inception to March 2026, supplemented by reference-list searches from two existing systematic reviews and a scoping review. Only English-language publications were eligible. Eligible studies were randomised controlled trials (RCTs) or quasi-RCTs of MBI (excluding yoga-only interventions) in typically developing youth, with at least one direct behavioural or computerised EF outcome. Risk of bias was assessed using Cochrane RoB 2. Hedges' g was computed for each study and pooled using a DerSimonian-Laird random-effects model. Subgroup analyses by age group, EF domain, and risk of bias were conducted, alongside leave-one-out sensitivity analyses, Egger's regression test, trim-and-fill, and Knapp-Hartung-adjusted meta-regression on intervention duration. Evidence certainty was rated using GRADE. Results: Thirteen RCTs (nine school-age, four preschool; total N = 1,560) met inclusion criteria. The pooled effect was g = 0.365 (95% CI 0.264 to 0.465; p < .00001), with negligible heterogeneity (I² = 0.0%; Q = 6.76, p = .87). 
Effects were consistent across age groups (school-age g = 0.389; preschool g = 0.318) and EF domains (inhibitory control, working memory, cognitive flexibility; p for subgroup differences = .60). Meta-regression on intervention duration (4-20 weeks) was non-significant (p = .79). The effect was robust in leave-one-out analyses, in the low risk-of-bias subgroup (g = 0.361; k = 8), and after trim-and-fill adjustment (g = 0.354). The 95% prediction interval (0.252 to 0.477) was entirely positive. GRADE certainty was rated moderate, downgraded once for risk of bias. Conclusions: MBIs appear to produce a small, statistically significant improvement in EF in youth aged 3-18 years, with moderate certainty of evidence per the GRADE framework. The effect is consistent across preschool and school-age samples and across EF domains, with no significant dose-response relationship within the 4-20 week range studied. Emerging mediation evidence suggests that EF improvement may serve as an important pathway through which MBI supports emotion regulation, though this requires replication. Further large-scale, pre-registered RCTs with active control conditions and longitudinal follow-up are warranted.
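The effect-size pipeline this review describes, Hedges' g per study followed by DerSimonian-Laird random-effects pooling, can be sketched in a few lines of Python. The example inputs in the test are hypothetical, not data from the included trials.

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Hedges' g: Cohen's d with the small-sample correction factor J."""
    # Pooled standard deviation across the two arms.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # bias correction, approaches 1 as N grows
    return j * d

def dl_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling.
    Returns (pooled effect, standard error, tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments between-study variance.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re)), tau2
```

When heterogeneity is negligible, as in this review (I² = 0.0%), tau² is truncated to zero and the random-effects pooled estimate collapses to the fixed-effect one.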
Bailey, M.; Hammerton, G.; Fairchild, G.; Tsunga, L.; Hoffman, N.; Burd, T.; Shadwell, R.; Danese, A.; Armour, C.; Zar, H. J.; Stein, D. J.; Donald, K. A.; Halligan, S. L.
Show abstract
Objective: There is little longitudinal research investigating links between violence exposure and mental disorders among children in low- and middle-income countries (LMICs), despite high rates of violence. We examined cross-sectional and longitudinal violence-mental health associations among children in a large South African birth cohort, the Drakenstein Child Health Study, including direct clinical interviews capturing children's mental disorders. Method: In this birth cohort (N=974), we assessed lifetime violence exposure and four subtypes (witnessed community, community victimization, witnessed domestic, domestic victimization) at ages 4.5 and 8 years via caregiver reports. At 8 years, caregivers completed the Child Behaviour Checklist, and psychiatric disorders were assessed using the Mini-International Neuropsychiatric Interview for Children and Adolescents, a self-report measure. We tested for associations using linear/logistic regressions, adjusted for confounders. Results: Most children (91%) had experienced violence by 8 years. Cross-sectionally, total violence exposure was associated with total (B=0.49 [95% CI 0.32, 0.66]), internalizing (0.32 [0.17, 0.47]), and externalizing problems (0.46 [0.31, 0.61]), and with increased odds of disorder at 8 years (aOR=1.09 [1.05, 1.13]). Longitudinally, total violence exposure up to 4.5 years was associated with total (B=0.27 [0.03, 0.52]), internalizing (0.24 [0.04, 0.44]), and externalizing scores (0.23 [0.008, 0.45]) at 8 years, but not with increased risk of psychiatric disorders. The strongest and most consistent associations were observed for domestic versus community violence subtypes. Conclusion: Our strong cross-sectional but weaker longitudinal findings suggest that recent violence exposures may be more critical than early exposures for children's mental health. Longitudinal exploration of other violence-affected LMIC populations is urgently needed.
Mossler, K.; D'Orazio, E.; Hall, K.; Osann, K.; Kimonis, V.; Quintero-Rivera, F.
Show abstract
Objective: The decline of the perinatal demise rate is slowing, and demises are often unexplained. Significant research has been done regarding the diagnostic yield and genetic causes of demise, but little is known about how geneticist involvement impacts outcomes. The goal of the study was to evaluate post-mortem genetic testing practices and the effects of the geneticist's involvement. Methods: Retrospective data from 111 perinatal demise cases were examined, including rates of prenatal genetic counseling, post-delivery genetics consults, genetic testing, and autopsy investigation. Results: In this cohort, 54% received genetic testing and 25% received a genetics consult. When compared to those without, cases with genetic specialist involvement were associated with significant increases in testing uptake (p=0.007), diagnostic yield (p<0.001), and patient education (p<0.001). Second trimester stillbirths and those with fewer ultrasound (US) abnormalities were less likely to receive genetic testing (both p values <0.001) and consults (p<0.001, p=0.020). Conclusion: Though it was not possible to avoid ascertainment bias, these data demonstrate that geneticist involvement correlates with a higher rate of testing, greater diagnostic yield, and more thorough counseling. These findings underscore the importance of integrating genetics providers into perinatal post-mortem healthcare teams.
Wiseman, J.; Sibley, S.; Perez-Patrigeon, S.; Mekhaeil, M.; Hanley, M.; Hunt, M.; Boyd, T.; Grant, B.; Boyd, J. G.
Show abstract
Introduction: There is increasing interest in the peripheral administration of vasopressors for two main reasons: (1) to expedite vasopressor initiation in patients with refractory shock and (2) to avoid the potential complications associated with central venous catheter placement. The current evidence on peripheral vasopressor administration is primarily based on single-center observational studies. There are inconsistencies in the administration of peripheral vasopressors, including catheter gauge and location, monitoring practices, vasopressor concentrations, and duration of use. This has made it difficult for institutions to develop best practice guidelines. A randomized controlled trial is needed to address this knowledge gap. Methods and analysis: The Peripheral Use of Low-dose Vasopressors for Safety and Efficacy (PULSE) in the intensive care unit is a prospective, unblinded feasibility study. Eligible patients will be 18 years or older, have no existing central venous catheter or peripherally inserted central catheter, and have shock requiring a minimum vasopressor dose of any of the following: norepinephrine 0.0625 mcg/kg/min, phenylephrine 0.625 mcg/kg/min, or epinephrine 0.0625 mcg/kg/min. Fifty patients will be randomized 1:1 into either the peripheral venous catheter or central venous catheter group. The primary outcome is feasibility, defined as (1) a recruitment rate of 4 participants per month, (2) a data capture rate of ≥90%, and (3) a <50% conversion rate from peripheral to central access. The secondary outcomes include the safety of peripheral vasopressor use, alive and central-line-free days, the number of attempts needed to place a catheter, volume status, in-hospital mortality rate, ICU and hospital length of stay, and patient-important outcomes. 
Implications: The data collected from this study will inform the design of a definitive randomized controlled trial to assess the safety and efficacy of protocol-driven peripheral vasopressor administration. Ethics and dissemination: This study received approval (6042888) from the Queen's University Health Sciences/Affiliated Teaching Hospitals Research Ethics Boards. Results of this study will be presented at critical care conferences and submitted for publication. Trial registration number: NCT06920173 (https://clinicaltrials.gov/study/NCT06920173).
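The weight-based eligibility doses above (mcg/kg/min) translate into infusion pump rates via simple unit arithmetic, sketched below. The drug concentration in the usage example is a hypothetical dilution chosen for illustration; this is unit-conversion arithmetic only, not clinical guidance.

```python
def infusion_rate_ml_per_hr(dose_mcg_kg_min, weight_kg, conc_mcg_per_ml):
    """Convert a weight-based vasopressor dose (mcg/kg/min) into a pump
    rate in mL/hr for a given drug concentration (mcg/mL).
    Illustrative arithmetic only, not clinical guidance."""
    mcg_per_min = dose_mcg_kg_min * weight_kg   # absolute dose per minute
    return mcg_per_min * 60 / conc_mcg_per_ml   # mcg/hr divided by mcg/mL

# Hypothetical example: norepinephrine at the trial's minimum dose of
# 0.0625 mcg/kg/min in an 80 kg patient, with an assumed 16 mcg/mL bag.
rate = infusion_rate_ml_per_hr(0.0625, 80, 16)  # mL/hr
```

This dependence of the mL/hr rate on the chosen dilution is exactly why the abstract flags inconsistent vasopressor concentrations across institutions as an obstacle to standardized peripheral administration protocols.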
Barve, R.; Gowda, D.; Illiayaraja, K. J.
Show abstract
Purpose: Recurrence in high-grade glioma (HGG) predominantly occurs within the high-dose radiation field, raising the question of whether treatment failure reflects limitations in radiation target delineation or is driven by intrinsic tumor biology. This study evaluated recurrence patterns following standard chemoradiotherapy and their treatment implications. Material and Methods: This retrospective single-center study included 41 patients with histologically confirmed HGG treated with surgery followed by radiotherapy with concurrent and adjuvant temozolomide (TMZ). Patients were followed through August 2018; those with recurrence were included in the analysis. Recurrence patterns were classified based on their spatial relationship to the 60 Gy isodose line as central, in-field, marginal, or distant. Survival outcomes were estimated using the Kaplan-Meier method and compared using the log-rank test. Results: The most common pattern of recurrence was central (15 patients, 36.5%), followed by in-field (11, 26.8%), distant (6, 14.6%), marginal (5, 12.1%), and multicentric (4, 9.8%). Central and in-field recurrences (local failures) accounted for 26 patients (63%). Median overall survival (OS) was 27 months, and median progression-free survival (PFS) was 12 months. Survival differed significantly by recurrence pattern (log-rank p = 0.018), with marginal recurrence associated with more favorable outcomes. Conclusion: The predominance of central and in-field recurrences within the high-dose region suggests that treatment failure in HGG is not solely explained by inadequate target delineation and may also be driven, in part, by intrinsic tumor biology, including radioresistant subpopulations and tumor heterogeneity. Future strategies may benefit from incorporating biologically guided approaches alongside optimization of radiation treatment parameters.